Attention-Aligned Transformer for Image Captioning
Authors
Abstract
Recently, attention-based image captioning models, which are expected to ground correct image regions for proper word generation, have achieved remarkable performance. However, some researchers have argued that existing attention mechanisms suffer from a "deviated focus" problem in determining the effective and influential features. In this paper, we present A2, an attention-aligned Transformer for image captioning, which guides attention learning in a perturbation-based, self-supervised manner without any annotation overhead. Specifically, we add a mask operation on image region features through a learnable network to estimate their true contribution to the ultimate description generation. We hypothesize that necessary region features, where a small disturbance causes an obvious performance degradation, deserve more attention weight. We then propose four aligned strategies that use this information to refine the attention weight distribution. Under such a pattern, image regions are attended to correctly in accordance with the output words. Extensive experiments conducted on the MS COCO dataset demonstrate that the proposed method consistently outperforms baselines in both automatic metrics and human evaluation. Trained models and code for reproducing the experiments are publicly available.
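The perturbation-based idea in the abstract can be sketched in a few lines: mask each region feature in turn, measure how much the captioning objective degrades, and redistribute attention weight toward the regions whose removal hurts most. The sketch below is a hypothetical toy illustration, not the authors' implementation; `caption_score` is a stand-in surrogate for the captioning model's log-likelihood, and the feature dimensions are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 5 region features; the toy score depends only on
# regions 1 and 3, standing in for the "necessary" image regions.
features = rng.normal(size=(5, 8))
w = np.zeros((5, 8))
w[1] = 1.0
w[3] = 1.0

def caption_score(feats):
    # Toy surrogate for the likelihood of the generated description.
    return float(np.sum(np.abs(w * feats)))

base = caption_score(features)

# Perturbation step: mask each region in turn and record the degradation.
degradation = np.empty(len(features))
for i in range(len(features)):
    masked = features.copy()
    masked[i] = 0.0                      # mask operation on region i
    degradation[i] = base - caption_score(masked)

# Regions whose removal causes obvious degradation deserve more weight.
importance = np.maximum(degradation, 0.0)
aligned_attention = importance / importance.sum()
print(aligned_attention.round(3))
```

In the paper this signal is produced by a learnable masking network and folded back into the Transformer's attention via four alignment strategies; the sketch only shows the core estimate-by-perturbation loop.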
Similar resources
Image Captioning with Attention
In the past few years, neural networks have fueled dramatic advances in image classification. Emboldened, researchers are looking for more challenging applications for computer vision and artificial intelligence systems. They seek not only to assign numerical labels to input data, but to describe the world in human terms. Image and video captioning is among the most popular applications in this t...
Image Captioning using Visual Attention
This project aims at generating captions for images using neural language models. There has been a substantial increase in the number of proposed models for the image captioning task since neural language models and convolutional neural networks (CNNs) became popular. Our project builds on one such work, which uses a variant of a recurrent neural network coupled with a CNN. We intend to enhance t...
Text-Guided Attention Model for Image Captioning
Visual attention plays an important role to understand images and demonstrates its effectiveness in generating natural language descriptions of images. On the other hand, recent studies show that language associated with an image can steer visual attention in the scene during our cognitive process. Inspired by this, we introduce a text-guided attention model for image captioning, which learns t...
Attention Correctness in Neural Image Captioning
Attention Map Visualization We visualize the attention maps of both the implicit attention model and our supervised attention model on the Flickr30k test set. As mentioned in the paper, 909 noun phrases are aligned for the implicit model and 901 for the supervised model. 635 of these alignments are common for both, and 595 of them have corresponding bounding boxes. Here we present a subset due ...
Social Image Captioning: Exploring Visual Attention and User Attention
Image captioning with a natural language has been an emerging trend. However, the social image, associated with a set of user-contributed tags, has been rarely investigated for a similar task. The user-contributed tags, which could reflect the user attention, have been neglected in conventional image captioning. Most existing image captioning models cannot be applied directly to social image ca...
Journal
Journal title: Proceedings of the ... AAAI Conference on Artificial Intelligence
Year: 2022
ISSN: 2159-5399, 2374-3468
DOI: https://doi.org/10.1609/aaai.v36i1.19940